520 research outputs found

    An Optimized Dynamic Mode Decomposition Model Robust to Multiplicative Noise

    Dynamic mode decomposition (DMD) is an efficient tool for decomposing spatio-temporal data into a set of low-dimensional modes, yielding the oscillation frequencies and growth rates of physically significant modes. In this paper, we propose a novel DMD model for dynamical systems affected by multiplicative noise. We first derive a maximum a posteriori (MAP) estimator for the data-based model decomposition of a linear dynamical system corrupted by certain multiplicative noise. Applying penalty relaxation to the MAP estimator, we obtain the proposed DMD model, whose epigraphical limits are the MAP estimator and the conventional optimized DMD model. We also propose an efficient alternating gradient descent method for solving the proposed model and analyze its convergence behavior. The proposed model is demonstrated on both synthetic data and numerically generated one-dimensional combustor data, and is shown to have superior reconstruction properties compared with state-of-the-art DMD models. Considering that multiplicative noise is ubiquitous in dynamical systems, the proposed DMD model opens up new possibilities for accurate data-based modal decomposition.
    Comment: 35 pages, 10 figures
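    For context, a minimal sketch of conventional exact DMD, the baseline that the optimized, noise-robust model above extends (standard algorithm, not the paper's method; the function name and rank parameter r are illustrative):

```python
import numpy as np

def exact_dmd(X, Y, r):
    """Exact DMD: fit a rank-r linear operator A with Y ~ A X.

    X, Y: (n, m) snapshot matrices, Y[:, k] one time step after X[:, k].
    Returns eigenvalues (growth/frequency content) and DMD modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = Ur.conj().T @ Y @ Vr / sr      # reduced operator Ur* A Ur
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vr / sr) @ W / eigvals      # exact DMD modes
    return eigvals, modes
```

    With snapshot spacing Δt, the continuous-time exponents ω = log(λ)/Δt give each mode's growth rate (Re ω) and oscillation frequency (Im ω).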

    Pop2Piano: Pop Audio-based Piano Cover Generation

    Piano covers of pop music are widely enjoyed, yet the task of automatically generating them remains understudied. This is partly due to the lack of synchronized {Pop, Piano Cover} data pairs, which makes it challenging to apply the latest data-intensive deep learning methods. To leverage the power of the data-driven approach, we build a large amount of paired and synchronized {pop, piano cover} data using an automated pipeline. In this paper, we present Pop2Piano, a Transformer network that generates piano covers given waveforms of pop music. To the best of our knowledge, this is the first model to directly generate a piano cover from pop audio without melody and chord extraction modules. We show that Pop2Piano, trained with our dataset, can generate plausible piano covers.
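    As a usage sketch only: Pop2Piano has a port in the Hugging Face transformers library under the sweetcocoa/pop2piano checkpoint. The names below (Pop2PianoProcessor, Pop2PianoForConditionalGeneration, the composer argument, and the decoded pretty_midi output) follow that port as best I recall and should be verified against the library documentation:

```python
# Sketch assuming the Hugging Face transformers port of Pop2Piano;
# exact signatures and output fields may differ.
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor

audio, sr = librosa.load("pop_song.wav", sr=44100)   # raw pop waveform

processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")

inputs = processor(audio=audio, sampling_rate=sr, return_tensors="pt")
token_ids = model.generate(input_features=inputs["input_features"],
                           composer="composer1")     # style-conditioning token

# decode generated tokens into a pretty_midi object and save the cover
outputs = processor.batch_decode(token_ids, feature_extractor_output=inputs)
outputs["pretty_midi_objects"][0].write("piano_cover.mid")
```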

    On the linear convergence of additive Schwarz methods for the p-Laplacian

    We consider additive Schwarz methods for boundary value problems involving the p-Laplacian. While the existing theoretical estimates of the convergence rate of additive Schwarz methods for the p-Laplacian are sublinear, the convergence rate observed in numerical experiments is linear. In this paper, we bridge this gap between theory and experiment by analyzing the linear convergence rate of additive Schwarz methods for the p-Laplacian. To estimate the linear convergence rate of the methods, we present two essential components. First, we present a new abstract convergence theory of additive Schwarz methods written in terms of a quasi-norm, which behaves similarly to the Bregman distance of the convex energy functional associated with the problem. Second, we provide a quasi-norm version of the Poincaré–Friedrichs inequality, which is essential for deriving a quasi-norm stable decomposition in a two-level domain decomposition setting.
    Comment: 23 pages, 2 figures
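    For reference, the p-Laplacian boundary value problem and its convex energy functional in a standard form (homogeneous Dirichlet conditions assumed here; the paper's precise setting may differ):

```latex
% p-Laplacian boundary value problem, 1 < p < \infty:
\[
  -\nabla \cdot \bigl( |\nabla u|^{p-2} \nabla u \bigr) = f
  \quad \text{in } \Omega,
  \qquad u = 0 \quad \text{on } \partial\Omega.
\]
% Equivalently, u is the minimizer of the convex energy functional
\[
  E(v) = \frac{1}{p} \int_{\Omega} |\nabla v|^{p} \, dx
         - \int_{\Omega} f v \, dx,
  \qquad v \in W_0^{1,p}(\Omega).
\]
```

    For p = 2 this reduces to the Poisson equation; for p ≠ 2 the operator degenerates where ∇u vanishes, which is why quasi-norms, rather than standard norms, appear in the analysis.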

    Balanced Group Convolution: An Improved Group Convolution Based on Approximability Estimates

    The performance of neural networks has been significantly improved by increasing the number of channels in convolutional layers. However, this gain comes at a higher computational cost, and numerous studies have focused on reducing it. One promising approach is group convolution, which reduces the computational cost by grouping channels. However, to the best of our knowledge, there has been no theoretical analysis of how well group convolution approximates standard convolution. In this paper, we mathematically analyze the approximation of group convolution to standard convolution with respect to the number of groups. Furthermore, we propose a novel variant of group convolution, called balanced group convolution, which achieves a closer approximation at a small additional computational cost. We provide experimental results that validate our theoretical findings and demonstrate the superior performance of balanced group convolution over other variants of group convolution.
    Comment: 26 pages, 2 figures
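    As background, a minimal PyTorch comparison of standard versus plain group convolution (this illustrates the baseline being analyzed, not the paper's balanced variant): with G groups, each output channel sees only C_in/G input channels, so weight count and FLOPs drop by roughly a factor of G.

```python
import torch
import torch.nn as nn

c_in, c_out, k, G = 64, 128, 3, 4

std = nn.Conv2d(c_in, c_out, k, padding=1)            # standard convolution
grp = nn.Conv2d(c_in, c_out, k, padding=1, groups=G)  # group convolution

n_std = sum(p.numel() for p in std.parameters())
n_grp = sum(p.numel() for p in grp.parameters())
print(n_std, n_grp)   # 73856 vs 18560: weights reduced ~4x (biases unchanged)

# output shapes match, only the channel connectivity differs
x = torch.randn(1, c_in, 32, 32)
assert std(x).shape == grp(x).shape == (1, c_out, 32, 32)
```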

    Parareal Neural Networks Emulating a Parallel-in-time Algorithm

    As deep neural networks (DNNs) become deeper, their training time increases. From this perspective, multi-GPU parallel computing has become a key tool in accelerating the training of DNNs. In this paper, we introduce a novel methodology for constructing, from a given DNN, a parallel neural network that can utilize multiple GPUs simultaneously. We observe that the layers of a DNN can be interpreted as the time steps of a time-dependent problem and can be parallelized by emulating a parallel-in-time algorithm called parareal. The parareal algorithm consists of fine structures, which can be implemented in parallel, and a coarse structure, which provides suitable approximations to the fine structures. By emulating it, the layers of the DNN are torn apart to form a parallel structure, which is connected using a suitable coarse network. We report accelerated and accuracy-preserving results of the proposed methodology applied to VGG-16 and ResNet-1001 on several datasets.
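    For context, a minimal sketch of the classic parareal iteration that the construction above emulates, applied to an ODE u' = f(u) rather than to a network (standard algorithm; the propagator choices here are illustrative): a cheap coarse propagator G and an accurate fine propagator F are combined in the correction U_{n+1} ← G(U_n^new) + F(U_n^old) − G(U_n^old), where all F evaluations in a sweep are independent and can run in parallel.

```python
import numpy as np

def parareal(f, u0, t0, t1, n_slices, n_iter, fine_steps=100):
    """Classic parareal for u' = f(u) on [t0, t1] over n_slices time slices."""
    def euler(u, dt, steps):            # forward Euler, used at two resolutions
        for _ in range(steps):
            u = u + dt * f(u)
        return u

    h = (t1 - t0) / n_slices
    G = lambda u: euler(u, h, 1)                        # cheap coarse propagator
    F = lambda u: euler(u, h / fine_steps, fine_steps)  # accurate fine propagator

    U = [u0]
    for n in range(n_slices):                     # initial coarse sweep
        U.append(G(U[-1]))
    for _ in range(n_iter):
        Fu = [F(U[n]) for n in range(n_slices)]   # parallelizable across slices
        Gu = [G(U[n]) for n in range(n_slices)]   # coarse values at old iterates
        for n in range(n_slices):                 # sequential correction sweep
            U[n + 1] = G(U[n]) + Fu[n] - Gu[n]    # G(U[n]) uses the updated U[n]
    return U

# usage: u' = -u with u(0) = 1; endpoint should approach exp(-2)
print(parareal(lambda u: -u, 1.0, 0.0, 2.0, n_slices=8, n_iter=3)[-1], np.exp(-2.0))
```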